Section: New Results

Stochastic control of jump processes and stochastic differential games

Participants: Agnès Sulem [in collaboration with B. Øksendal (Oslo University) and T. Zhang (Manchester University)], John Joseph Absalom Hosking.

Stochastic control under model uncertainty

In [58], we study optimal stochastic control problems with jumps under model uncertainty. We rewrite such problems as stochastic differential games of forward-backward stochastic differential equations. We prove general stochastic maximum principles for such games, both in the zero-sum case (finding conditions for saddle points) and in the non-zero-sum case (finding conditions for Nash equilibria). We then apply these results to optimal portfolio and consumption problems under model uncertainty, combining the optimality conditions given by the stochastic maximum principles with Malliavin calculus to obtain a system of equations that determines the optimal strategies.
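Schematically, and with notation that is ours rather than that of [58], the zero-sum version of such a game involves a state process controlled by both players, a two-player Hamiltonian, and a saddle-point condition:

```latex
% State dynamics controlled by the agent (u) and the market (v)
% (illustrative form; notation not taken from [58]):
dX(t) = b(t, X(t), u(t), v(t))\,dt + \sigma(t, X(t), u(t), v(t))\,dB(t)
        + \int_{\mathbb{R}} \gamma(t, X(t^-), u(t), v(t), \zeta)\,\tilde{N}(dt, d\zeta)

% Hamiltonian, with adjoint processes (p, q, r):
H(t, x, u, v, p, q, r) = f(t, x, u, v) + b\,p + \sigma\,q
        + \int_{\mathbb{R}} \gamma(t, x, u, v, \zeta)\, r(t, \zeta)\,\nu(d\zeta)

% Saddle-point condition at a candidate optimum (\hat{u}, \hat{v}):
H(t, \hat{X}(t), u, \hat{v}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t,\cdot))
  \le H(t, \hat{X}(t), \hat{u}(t), \hat{v}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t,\cdot))
  \le H(t, \hat{X}(t), \hat{u}(t), v, \hat{p}(t), \hat{q}(t), \hat{r}(t,\cdot))
```

Here B is a Brownian motion, \tilde{N} the compensated jump measure with Lévy measure \nu, and the adjoint triple (p, q, r) solves an associated backward stochastic differential equation.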

In [45], we consider some robust optimal portfolio problems for markets modeled by (possibly non-Markovian) Itô–Lévy processes. Mathematically, the situation can be described as a stochastic differential game, where one of the players (the agent) is trying to find the portfolio which maximizes the utility of her terminal wealth, while the other player ("the market") is controlling some of the unknown parameters of the market (e.g. the underlying probability measure, representing a model uncertainty problem) and is trying to minimize this maximal utility of the agent. This leads to a worst-case scenario control problem for the agent.
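As a toy numerical illustration of the worst-case viewpoint (a minimal sketch, not the model or method of [45]; the jump-diffusion parameters, the ambiguity set of drifts, and all function names below are ours), one can estimate the expected log-utility of a fixed portfolio fraction under each candidate market model and retain the least favourable one:

```python
import numpy as np

def simulate_terminal_wealth(pi, mu, sigma=0.2, jump_rate=0.5, jump_size=-0.1,
                             x0=1.0, T=1.0, n_steps=100, n_paths=5000, seed=0):
    """Euler scheme for wealth with a constant portfolio fraction `pi` in a
    jump-diffusion (Itô–Lévy) asset with drift `mu`; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)     # Brownian increments
        dN = rng.poisson(jump_rate * dt, n_paths)      # Poisson jump counts
        ret = mu * dt + sigma * dB + jump_size * dN    # risky-asset return on [t, t+dt]
        x = x * (1.0 + pi * ret)
    return x

def worst_case_log_utility(pi, candidate_drifts):
    """Worst-case E[log X_T] over an (illustrative) ambiguity set of drifts:
    the 'market' picks the drift least favourable to the agent."""
    utilities = []
    for mu in candidate_drifts:
        x_T = simulate_terminal_wealth(pi, mu)
        utilities.append(np.mean(np.log(np.maximum(x_T, 1e-12))))
    return min(utilities)

drifts = [0.02, 0.05, 0.08]          # ambiguity set chosen by "the market"
u_half = worst_case_log_utility(0.5, drifts)
u_full = worst_case_log_utility(1.0, drifts)
```

The agent would then maximize this worst-case criterion over portfolio fractions, which is the game-theoretic structure described above in discrete, simulated form.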

In the Markovian case such problems can be studied using the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation, but these methods do not work in the non-Markovian case. We approach the problem by transforming it to a stochastic differential game for backward stochastic differential equations (BSDE game). Using comparison theorems for BSDEs with jumps we arrive at criteria for the solution of such games, in the form of a kind of non-Markovian analogue of the HJBI equation. The results are illustrated by examples.
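In schematic form (our notation, not the exact formulation of [45]), the objects involved are a BSDE with jumps and a comparison property of the following type:

```latex
% BSDE with jumps: find (Y, Z, K) such that
Y(t) = \xi + \int_t^T g\bigl(s, Y(s), Z(s), K(s, \cdot)\bigr)\,ds
       - \int_t^T Z(s)\,dB(s)
       - \int_t^T \int_{\mathbb{R}} K(s, \zeta)\,\tilde{N}(ds, d\zeta)

% Comparison theorem (informally): if \xi_1 \le \xi_2 and g_1 \le g_2,
% with suitable monotonicity of the drivers in the jump component K,
% then Y_1(t) \le Y_2(t) for all t.
```

Ordering the value processes of the two players' BSDEs in this way is what yields the optimality criteria for the game.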

Singular stochastic control

In [59], A. Sulem and B. Øksendal study partial-information, possibly non-Markovian, singular stochastic control of Itô–Lévy processes and obtain general maximum principles. The results are used to find connections between singular stochastic control, reflected BSDEs and optimal stopping in the partial information case. As an application, we give an explicit solution to a class of optimal stopping problems with finite horizon and partial information.
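The connection to reflected BSDEs can be sketched as follows (an illustrative form under simplifying assumptions, not the precise partial-information statement of [59]): a lower obstacle L is enforced by an additional nondecreasing process A that acts only when the solution touches the obstacle (the Skorokhod condition):

```latex
% Reflected BSDE with jumps and lower obstacle L(t): find (Y, Z, K, A) with
Y(t) = \xi + \int_t^T g\bigl(s, Y(s), Z(s), K(s, \cdot)\bigr)\,ds + A(T) - A(t)
       - \int_t^T Z(s)\,dB(s)
       - \int_t^T \int_{\mathbb{R}} K(s, \zeta)\,\tilde{N}(ds, d\zeta)

Y(t) \ge L(t) \ \text{for all } t, \qquad
\int_0^T \bigl(Y(t^-) - L(t^-)\bigr)\,dA(t) = 0 \quad \text{(Skorokhod condition)}
```

The component Y then coincides with the value process of an optimal stopping problem whose reward is built from L, which is the type of link exploited here in the partial information case.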

Singular control of SPDEs

In [57], A. Sulem, B. Øksendal and T. Zhang study general singular control problems for random fields given by a stochastic partial differential equation (SPDE). They show that under some conditions the optimal singular control can be identified with the solution of a coupled system consisting of an SPDE and a kind of reflected backward SPDE (RBSPDE). They also establish existence and uniqueness of solutions of RBSPDEs.

Optimal control with delay

In [44], we study optimal control problems for (time-)delayed stochastic differential equations with jumps. We establish sufficient and necessary stochastic maximum principles for an optimal control of such systems. The associated adjoint processes are shown to satisfy a (time-)advanced backward stochastic differential equation (ABSDE). Several results on existence and uniqueness of such ABSDEs are shown. The results are illustrated by an application to optimal consumption from a cash flow with delay.
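Schematically (in our notation, not necessarily that of [44]), an ABSDE runs backward from the horizon T but its driver looks forward by the delay \delta, mirroring the delay in the forward dynamics:

```latex
% Time-advanced BSDE (ABSDE) with delay \delta > 0 (illustrative form):
-\,dp(t) = \mathbb{E}\Bigl[ F\bigl(t, p(t), p(t+\delta), q(t), q(t+\delta),
             r(t, \cdot), r(t+\delta, \cdot)\bigr) \,\Big|\, \mathcal{F}_t \Bigr]\,dt
           - q(t)\,dB(t)
           - \int_{\mathbb{R}} r(t, \zeta)\,\tilde{N}(dt, d\zeta),
           \qquad t \in [0, T]

% terminal data prescribed on the whole interval [T, T + \delta]:
p(t) = G(t) \quad \text{for } t \in [T, T + \delta]
```

The conditional expectation makes the driver adapted even though it involves the future values p(t+\delta), q(t+\delta), r(t+\delta, \cdot).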

In [48], we prove sufficient and necessary stochastic maximum principles for the optimal control of SPDEs with delay and study the associated time-advanced backward stochastic partial differential equations.

Stochastic differential games

In [55], J. Hosking has constructed a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we use a spike-variation approach with adjoint representation techniques, analogous to that of S. Peng in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations (BSDEs), and second-order adjoint processes of a first type are defined as solutions to certain BSDEs. Second-order adjoint processes of a second type are defined as solutions of backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field BSDEs. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations.
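For orientation, a mean-field BSDE is one whose driver depends on the law of the solution through expectations; a plausible schematic form (our notation and our reading of the conditional variant, not necessarily the exact equations of [55]) is:

```latex
% Mean-field BSDE (schematic):
-\,dY(t) = g\bigl(t, Y(t), Z(t), \mathbb{E}[Y(t)], \mathbb{E}[Z(t)]\bigr)\,dt
           - Z(t)\,dB(t), \qquad Y(T) = \xi

% A conditional mean-field BSDE replaces \mathbb{E}[\,\cdot\,] by a conditional
% expectation \mathbb{E}[\,\cdot \mid \mathcal{E}_t] with respect to a
% sub-filtration (e.g. an agent's partial information).
```

The sub-filtration structure is what allows the asymmetric, partial information of the individual agents to enter the adjoint equations.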